Generalized variational inference


Generalized Variational Inference in Function Spaces: Gaussian Measures meet Bayesian Deep Learning

Neural Information Processing Systems

We develop a framework for generalized variational inference in infinite-dimensional function spaces and use it to construct a method termed Gaussian Wasserstein inference (GWI). GWI leverages the Wasserstein distance between Gaussian measures on the Hilbert space of square-integrable functions to determine a variational posterior via a tractable optimization criterion, and it avoids pathologies arising in standard variational function-space inference. An exciting application of GWI is the ability to use deep neural networks in its variational parametrization, combining their superior predictive performance with principled uncertainty quantification analogous to that of Gaussian processes. The proposed method obtains state-of-the-art performance on several benchmark datasets.
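
The abstract's central quantity, the Wasserstein distance between Gaussian measures, has a well-known closed form; the identity below is standard background rather than something stated in the listing. For Gaussian measures $\mathcal{N}(m_1, C_1)$ and $\mathcal{N}(m_2, C_2)$ on a separable Hilbert space such as $L^2$, with trace-class covariance operators,

$$
W_2^2\big(\mathcal{N}(m_1, C_1),\, \mathcal{N}(m_2, C_2)\big)
= \|m_1 - m_2\|^2 + \operatorname{tr}(C_1) + \operatorname{tr}(C_2)
- 2\,\operatorname{tr}\!\Big(\big(C_1^{1/2} C_2\, C_1^{1/2}\big)^{1/2}\Big),
$$

which is the kind of tractable criterion a Wasserstein-based variational objective can build on.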


Empirical Bayes for Dynamic Bayesian Networks Using Generalized Variational Inference

Kungurtsev, Vyacheslav, Apaar, Khandelwal, Aarya, Rastogi, Parth Sandeep, Chatterjee, Bapi, Mareček, Jakub

arXiv.org Artificial Intelligence

Dynamic Bayesian Networks (DBNs) are a class of Probabilistic Graphical Models that model a Markovian dynamic process by defining the transition kernel through the DAG structure of a graph fit to a dataset. A number of structure learners enable one to find the structure of a DBN that fits the data, each with its own set of advantages and disadvantages. The structure of a DBN itself presents transparent criteria for identifying causal relationships between variables. Without large quantities of data, however, identifying a ground-truth causal structure becomes unrealistic in practice. One can instead consider a procedure in which a set of graphs identifying structure is computed as approximate noisy solutions and subsequently amortized in a broader statistical procedure fitting a mixture of DBNs. Each component of the mixture presents an alternative hypothesis on the causal structure. From the mixture weights, one can also compute Bayes factors comparing the preponderance of evidence between different models. This presents a natural opportunity for the development of Empirical Bayesian methods.
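
A hedged illustration of the Bayes-factor remark (notation mine, not from the abstract): if the mixture places prior weight $\pi_k$ and fitted posterior weight $w_k$ on candidate structure $G_k$, then since $w_k \propto \pi_k\, p(\mathcal{D} \mid G_k)$, the Bayes factor between two structures can be read off from the weights as

$$
\mathrm{BF}_{ij}
= \frac{p(\mathcal{D} \mid G_i)}{p(\mathcal{D} \mid G_j)}
= \frac{w_i / \pi_i}{w_j / \pi_j}.
$$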


Robust Deep Gaussian Processes

Knoblauch, Jeremias

arXiv.org Artificial Intelligence

This report provides an in-depth overview of the implications and novelty that Generalized Variational Inference (GVI) (Knoblauch et al., 2019) brings to Deep Gaussian Processes (DGPs) (Damianou & Lawrence, 2013). Specifically, robustness to model misspecification as well as principled alternatives for uncertainty quantification are motivated from an information-geometric view. These modifications have clear interpretations and can be implemented in less than 100 lines of Python code. Most importantly, the corresponding empirical results show that DGPs can greatly benefit from the presented enhancements.
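
A minimal sketch of the GVI idea referenced above, following the spirit of Knoblauch et al. (2019) with illustrative symbols: standard variational inference minimizes the negative ELBO,

$$
q^{*} = \operatorname*{arg\,min}_{q \in Q}\; \mathbb{E}_{q(\theta)}\Big[-\textstyle\sum_{n=1}^{N} \log p(x_n \mid \theta)\Big] + \mathrm{KL}(q \,\|\, \pi),
$$

whereas GVI allows both ingredients to be swapped, replacing the log loss with a (possibly robustified) loss $\ell(\theta, x_n)$ and the KL term with another divergence $D$,

$$
q^{*} = \operatorname*{arg\,min}_{q \in Q}\; \mathbb{E}_{q(\theta)}\Big[\textstyle\sum_{n=1}^{N} \ell(\theta, x_n)\Big] + D(q \,\|\, \pi).
$$

These substitutions are what yield the robustness to misspecification and the alternative uncertainty quantification mentioned in the abstract.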